Call to Vigilance
This is the final piece in the “most important century” series, which has argued that there’s a high probability[1] that the coming decades will see:
The development of a technology like PASTA (Process for Automating Scientific and Technological Advancement).
A resulting productivity explosion leading to development of further transformative technologies.
The seed of a stable galaxy-wide civilization, possibly featuring digital people, or possibly run by misaligned AI.
When trying to call attention to an underrated problem, it’s typical to close on a “call to action”: a tangible, concrete action readers can take to help.
But this is challenging, because as I argued previously, there are a lot of open questions about what actions are helpful vs. harmful. (Although we can identify some actions that seem robustly helpful today.)
This makes for a somewhat awkward situation. When confronting the “most important century” hypothesis, my attitude doesn’t match the familiar ones of “excitement and motion” or “fear and avoidance.” Instead, I feel an odd mix of intensity, urgency, confusion and hesitance. I’m looking at something bigger than I ever expected to confront, feeling underqualified and ignorant about what to do next. This is a hard mood to share and spread, but I’m trying.
So instead of a call to action, I want to make a call to vigilance. If you’re convinced by the arguments in this piece, then don’t rush to “do something” and then move on. Instead, take whatever robustly good actions you can today, and otherwise put yourself in a better position to take important actions when the time comes.
This could mean:
Finding ways to interact more with, and learn more about, key topics/fields/industries such as AI (for obvious reasons), science and technology generally (as a lot of the “most important century” hypothesis runs through an explosion in scientific and technological advancement), and relevant areas of policy and national security.
Taking opportunities (when you see them) to move your career in a direction that is more likely to be relevant (some thoughts of mine on this are here; also see 80,000 Hours).
Connecting with other people interested in these topics (I believe this has been one of the biggest drivers of people coming to do high-impact work in the past). Currently, I think the effective altruism community is the best venue for this, and you can learn about how to connect with people via the Centre for Effective Altruism (see the “Get involved” dropdown). If new ways of connecting with people come up in the future, I will likely post them on Cold Takes.
And of course, taking any opportunities you see for robustly helpful actions.
Buttons you can click
Here’s something you can do right now that would be genuinely helpful, though maybe not as viscerally satisfying as signing a petition or making a donation.
In my day job, I have a lot of moments where I—or someone I’m working with—is looking for a particular kind of person (perhaps to fill a job opening with a grantee, or to lend expertise on some topic, or something else). Over time, I expect there to be more and more opportunities for people with specific skills, interests, expertise, etc. to take actions that help make the best of the most important century. And I think a major challenge will simply be knowing who’s out there—who’s interested in this cause, and wants to help, and what skills and interests they have.
If you’re a person we might wish we could find in the future, you can help now by sending in information about yourself via this simple form. I vouch that your information won’t be sold or otherwise used to make money, that your communication preferences (which the form asks about in detail) will be respected, and that you’ll always be able to opt out of any communications.
Sharing a headspace
In This Can’t Go On, I analogized the world to people on a plane blasting down the runway, without knowing why they’re moving so fast or what’s coming next:
As someone sitting on this plane, I’d love to be able to tell you I’ve figured out exactly what’s going on and what future we need to be planning for. But I haven’t.
Lacking answers, I’ve tried to at least show you what I do see:
Dim outlines of the most important events in humanity’s past or future.
A case that they’re approaching us more quickly than it seems—whether or not we’re ready.
A sense that the world and the rules we’re all used to can’t be relied on. That we need to lift our gaze above the daily torrent of tangible, relatable news—and try to wrap our heads around weirder, wilder matters that are more likely to be seen as the headlines about this era billions of years from now.
There’s a lot I don’t know. But if this is the most important century, I do feel confident that we as a civilization aren’t yet up to the challenges it presents.
If that’s going to change, it needs to start with more people seeing the situation for what it is, taking it seriously, taking action when they can—and when not, staying vigilant.
This work is licensed under a Creative Commons Attribution 4.0 International License.
[1] “I am forecasting more than a 10% chance transformative AI will be developed within 15 years (by 2036); a ~50% chance it will be developed within 40 years (by 2060); and a ~2/3 chance it will be developed this century (by 2100).”
Comments
Do you have an opinion on the second-best venue for people interested in these issues to find community?
Not particularly, sorry! There are communities that don’t necessarily identify as “effective altruist” but are highly concerned with reducing potential risks from advanced AI, though I’m guessing you’re already familiar with these (e.g., some people/organizations connected to or inspired by MIRI).
Great series! One question I had: Why did you consistently refer to a “stable galaxy-wide civilization” throughout the series as opposed to a civilization of greater scale? Was this wording used merely for simplicity of communication or do you have some reason to think that a civilization spanning multiple galaxies is significantly less likely?
Thanks! I just used “galaxy” for convenience—it was easy to estimate certain figures for our galaxy (such as how long it would take to reach its outer limits), and I think “galaxy” gives a sufficient picture of the potential scale I’m envisioning. I do think it’s possible to keep going beyond the galaxy, though at some point I’d expect to encounter another spacefaring civilization with a different origin, and accounting for that could complicate some of the statements and calculations illustrating that civilization could get very large and last very long.